# Efficient reasoning

**QVikhr-3-1.7B-Instruction-noreasoning** · Vikhrmodels · Apache-2.0
An instruction model based on Qwen/Qwen3-1.7B, trained on the Russian GrandMaster2 dataset and designed for efficient processing of Russian and English text.
Tags: Large Language Model, Transformers · Downloads: 274 · Likes: 10

**Qwen3-8B-AWQ** · Qwen · Apache-2.0
An 8.2B-parameter model from the latest generation of the Tongyi Qianwen (Qwen) series, quantized to 4 bits with AWQ to improve inference efficiency. It supports switching between thinking and non-thinking modes and offers strong reasoning, instruction-following, and agent capabilities.
Tags: Large Language Model, Transformers · Downloads: 13.99k · Likes: 2

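The thinking/non-thinking switch is exposed through the chat template in Hugging Face transformers. A minimal sketch, assuming the repo id `Qwen/Qwen3-8B-AWQ` from the card above, an environment with AWQ kernel support (e.g. the `autoawq` package) installed, and illustrative rather than tuned generation settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the card above; loading the AWQ checkpoint requires
# AWQ kernel support (e.g. the autoawq package) in the environment.
model_id = "Qwen/Qwen3-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]

# Qwen3's chat template accepts enable_thinking: True emits a
# <think>...</think> reasoning block before the answer; False requests
# a direct, non-reasoning reply.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

Setting `enable_thinking=True` instead trades latency for the model's explicit reasoning trace, which is the efficiency lever this mode switch is meant to provide.
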
**Qwen2.5-VL-7B-Instruct-GPTQ-Int4** · hfl · Apache-2.0
An unofficial GPTQ-Int4 quantized build of the Qwen2.5-VL-7B-Instruct model, supporting multimodal image-text-to-text tasks.
Tags: Image-to-Text, Transformers, Supports Multiple Languages · Downloads: 872 · Likes: 3

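Inference against the quantized checkpoint follows the standard Qwen2.5-VL path in transformers. A minimal sketch, assuming the repo id `hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int4` from the card above, a transformers release that ships the Qwen2.5-VL classes, a GPTQ backend installed, and a placeholder image URL:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Repo id taken from the card above; the Int4 weights additionally need a
# GPTQ backend (e.g. gptqmodel or auto-gptq) in the environment.
model_id = "hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int4"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

# Placeholder URL; substitute any image you want described.
url = "https://example.com/photo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
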
**WiroAI Turkish LLM 9b** · WiroAI
A Turkish large language model based on Gemma-2-9b, developed by WiroAI and specialized in dialogue generation tasks.
Tags: Large Language Model, Transformers, Other · Downloads: 3,062 · Likes: 28

**360Zhinao-7B-Base** · qihoo360 · Apache-2.0
360 Zhinao is an open-source large language model series developed by Qihoo 360, comprising base and dialogue models with various context lengths and supporting both Chinese and English.
Tags: Large Language Model, Transformers, Supports Multiple Languages · Downloads: 90 · Likes: 5

**SAM** · SuperAGI · Apache-2.0
A 7-billion-parameter small-scale reasoning model that outperforms larger models on multiple benchmarks.
Tags: Large Language Model, Transformers, English · Downloads: 138 · Likes: 33